Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods prove to be practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
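A minimal NumPy sketch of the PRCA idea described above (an illustrative toy, not the paper's exact formulation): where PCA eigen-decomposes the activation covariance, here a symmetrized activation-context cross-covariance is decomposed instead, so the leading directions capture relevance rather than variance. The "context" vectors, whose dot product with the activations stands in for an attribution score, are synthetic assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: activations a_i and "context" vectors c_i such that the
# relevance of a_i is modeled as the dot product a_i . c_i
# (an assumption standing in for attribution scores from, e.g., LRP).
n, d = 500, 10
A = rng.normal(size=(n, d))
C = rng.normal(size=(n, d))
C[:, :2] += 3 * A[:, :2]            # relevance concentrates in the first 2 dims

def prca(A, C, k):
    """Top-k directions maximizing summed relevance of projected activations."""
    M = A.T @ C / len(A)             # cross-covariance of activations and contexts
    M = 0.5 * (M + M.T)              # symmetrize so eigenvectors are orthonormal
    w, U = np.linalg.eigh(M)         # eigh returns eigenvalues in ascending order
    return U[:, np.argsort(w)[::-1][:k]]

U = prca(A, C, k=2)
# In this toy setup the relevant subspace should align with the first two axes.
alignment = np.linalg.norm(U[:2, :], axis=0)
print(alignment)
```

Variance-based PCA on `A` alone would find no preferred direction here, since all activation dimensions have equal variance; only the relevance signal singles out the first two axes.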
Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction. We identify attention heads and layer normalization as the main reasons for such unreliable explanations and propose a more stable way to propagate through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method, is shown both theoretically and empirically to overcome the deficiency of simple gradient-based approaches and to achieve state-of-the-art explanation performance on a broad range of Transformer models and datasets.
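One way to make relevance propagation through an attention layer stable, in the spirit of the abstract above, is to treat the attention matrix as a constant during the backward pass. The following NumPy toy is a hedged sketch of that idea, not the authors' exact rule: relevance flowing into the attention output is redistributed to the value vectors via an epsilon-stabilized ratio rule, with the attention weights held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy attention: outputs O = A @ V, with A a row-stochastic attention matrix.
T, d = 4, 3
A = rng.random((T, T))
A /= A.sum(axis=1, keepdims=True)
V = rng.normal(size=(T, d))
O = A @ V

# Output relevance (hypothetical: taken equal to the outputs here).
R_out = O.copy()

def lrp_attention(A, V, R_out, eps=1e-9):
    """Redistribute relevance through o_i = sum_j A_ij v_j, with A detached
    (treated as a constant weight matrix rather than a function of the input)."""
    Z = A[:, :, None] * V[None, :, :]      # contributions z_ijk = A_ij * v_jk
    denom = Z.sum(axis=1, keepdims=True)   # equals o_ik
    denom = denom + eps * np.sign(denom)   # epsilon-stabilized denominator
    return (Z / denom * R_out[:, None, :]).sum(axis=0)  # relevance on V

R_V = lrp_attention(A, V, R_out)
# Conservation: total relevance is preserved through the layer (up to eps).
print(R_V.sum(), R_out.sum())
```

Because each output's relevance is split among its contributions and then summed, the total relevance is conserved, which a raw gradient computed through a softmax attention map would not guarantee.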
Traditionally, one-dimensional models based on scaling laws have been used to parameterize convective heat transfer in the interiors of rocky planets like Earth, Mars, Mercury, and Venus, in order to circumvent the computational bottleneck of high-fidelity forward runs in two or three dimensions. However, these are limited in the physics they can model (e.g., depth-dependent material properties) and predict only mean quantities, such as the mean mantle temperature. We recently showed that feedforward neural networks (FNNs) trained using a large number of 2D simulations can overcome this limitation and reliably predict the evolution of entire 1D laterally averaged temperature profiles in time for complex models. We now extend this approach to predict the full 2D temperature field, which contains information in the form of convective structures such as hot plumes and cold downwellings. Using a dataset of 10,525 two-dimensional simulations of the thermal evolution of the mantle of a Mars-like planet, we show that deep learning techniques can produce reliable parameterized surrogates (i.e., surrogates that predict state variables such as temperature based only on parameters) of the underlying partial differential equations. We first use convolutional autoencoders to compress the temperature fields by a factor of 142, and then use FNNs and long short-term memory networks (LSTMs) to predict the compressed fields. On average, the FNN predictions are 99.30% and the LSTM predictions are 99.22% accurate with respect to unseen simulations. Proper orthogonal decomposition (POD) of the LSTM and FNN predictions shows that, despite a lower mean absolute relative accuracy, LSTMs capture the flow dynamics better than FNNs. When summed, the POD coefficients from the FNN predictions and from the LSTM predictions amount to 96.51% and 97.66% relative to the coefficients of the original simulations, respectively.
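The compress-then-predict pipeline described above can be illustrated at toy scale. The sketch below is a deliberately simplified stand-in: POD (via SVD) replaces the convolutional autoencoder, a linear least-squares one-step map replaces the FNN/LSTM surrogates, and the "temperature fields" are synthetic traveling waves rather than mantle convection data; the compression factor here is 16x, far below the paper's 142x.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a sequence of flattened 2D temperature fields:
# two traveling waves plus a little noise (not real mantle data).
t = np.linspace(0, 6 * np.pi, 200)
g = np.linspace(0, 1, 64)
snapshots = (np.sin(2 * np.pi * 4 * g + t[:, None])
             + 0.5 * np.cos(2 * np.pi * 1.5 * g + 0.5 * t[:, None])
             + 0.01 * rng.normal(size=(200, 64)))

# 1) Compress with POD (SVD), keeping a handful of coefficients per snapshot.
k = 4
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(0), full_matrices=False)
coeffs = U[:, :k] * s[:k]                    # 64 -> 4: low-dimensional code

# 2) Fit a one-step predictor in coefficient space
#    (a linear stand-in for the FNN/LSTM surrogates described above).
X, Y = coeffs[:-1], coeffs[1:]
W = np.linalg.lstsq(X, Y, rcond=None)[0]

pred = X @ W
rel_err = np.linalg.norm(pred - Y) / np.linalg.norm(Y)
print(f"one-step relative error in POD space: {rel_err:.4f}")
```

The design point this illustrates is the one in the abstract: predicting a handful of compressed coefficients is far cheaper than predicting every grid cell, and the decoder (here, the POD modes) reconstructs the full field afterwards.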
Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text. These results have sparked a renewed interest in the anomaly detection problem and led to the introduction of a great variety of new methods. With the emergence of numerous such methods, including approaches based on generative models, one-class classification, and reconstruction, there is a growing need to bring methods of this field into a systematic and unified perspective. In this review we aim to identify the common underlying principles as well as the assumptions that are often made implicitly by various methods. In particular, we draw connections between classic 'shallow' and novel deep approaches and show how this relation might cross-fertilize or extend both directions. We further provide an empirical assessment of major existing methods that is enriched by the use of recent explainability techniques, and present specific worked-through examples together with practical advice. Finally, we outline critical open challenges and identify specific paths for future research in anomaly detection.
A recent trend in machine learning has been to enrich learned models with the ability to explain their own predictions. So far, the emerging field of explainable AI (XAI) has mainly focused on supervised learning, in particular deep neural network classifiers. In many practical problems, however, label information is not given and the goal is instead to discover the underlying structure of the data, for example, its clusters. While powerful methods exist for extracting cluster structure from data, they typically do not answer the question why a certain data point has been assigned to a given cluster. We propose a new framework that can, for the first time, explain cluster assignments in terms of input features in an efficient and reliable manner. It is based on the novel insight that clustering models can be rewritten as neural networks, or "neuralized". Cluster predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
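For k-means, the "neuralization" insight above has a concrete form: the assignment of a point to cluster c can be written as a min-pooling over linear functions (differences of squared distances), which is exactly a small neural network and therefore admits gradient-based attribution. The sketch below illustrates this on fixed toy centroids; the gradient-times-input attribution at the end is one simple choice, not necessarily the paper's exact rule.

```python
import numpy as np

# Toy k-means model: fixed centroids (as if already fitted).
mu = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])

def neuralized_logit(x, mu, c):
    """Cluster-c logit of k-means, rewritten as a min over linear functions:
    f_c(x) = min_{k != c} [ 2(mu_c - mu_k).x + |mu_k|^2 - |mu_c|^2 ],
    which equals min_{k != c} |x - mu_k|^2 - |x - mu_c|^2 and is
    positive iff x is assigned to cluster c."""
    others = [k for k in range(len(mu)) if k != c]
    scores = [2 * (mu[c] - mu[k]) @ x + mu[k] @ mu[k] - mu[c] @ mu[c]
              for k in others]
    j = int(np.argmin(scores))
    w = 2 * (mu[c] - mu[others[j]])   # gradient of the active linear piece
    return scores[j], w

x = np.array([3.5, 0.5])
logit, grad = neuralized_logit(x, mu, c=1)
# Simple feature attribution for the assignment: gradient times input.
attribution = grad * x
print(logit, attribution)
```

Here the point `(3.5, 0.5)` is assigned to the centroid at `(4, 0)` (positive logit), and the attribution indicates that the first coordinate drives the assignment.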
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems, e.g., image classification, natural language processing or human action recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method is based on deep Taylor decomposition and efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
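The core mechanism described above, backpropagating explanations layer by layer from output to input, can be illustrated for a single linear layer. The sketch below uses the epsilon-stabilized LRP rule as a stand-in for the deep Taylor decomposition rules the abstract refers to; it is a one-layer toy with random weights, no bias term, and a hypothetical choice of output relevance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy one-layer setup: a = inputs, z = a @ W the pre-activations (bias omitted).
a = rng.random(5)
W = rng.normal(size=(5, 3))
z = a @ W

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Epsilon rule for a linear layer: redistribute output relevance R_out
    to the inputs in proportion to each contribution a_j * W_jk."""
    z = a @ W
    s = R_out / (z + eps * np.sign(z))   # stabilized per-output ratios
    return a * (W @ s)                   # relevance landing on each input

R_out = np.maximum(z, 0)                 # hypothetical relevance at the output
R_in = lrp_epsilon(a, W, R_out)
print(R_in, R_in.sum(), R_out.sum())
```

The key property, visible in the printout, is conservation: the relevance arriving at the inputs sums to the relevance that left the output layer (up to the epsilon stabilizer), which is what allows the rule to be chained backwards through many layers.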
Deep Neural Networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multi-layer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012 and MIT Places data sets. Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of neural network performance.
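The region-perturbation evaluation described above can be sketched in a few lines: perturb input features in decreasing order of attributed relevance and track how fast the model's score drops (an AOPC-style curve). The toy below uses a linear "model" whose true pixel contributions are known and a zero baseline as the perturbation, both simplifying assumptions relative to the paper's image-region setting.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "model": a linear scorer on a flattened image, so the ground-truth
# importance of each pixel is exactly weight * pixel.
w = rng.normal(size=100)
x = rng.random(100)
f = lambda x: w @ x

baseline = 0.0
heatmap = w * x                       # the attribution being evaluated
order = np.argsort(heatmap)[::-1]     # most relevant pixels first

# Flip pixels in decreasing relevance order; track the score drop (AOPC-style).
x_pert = x.copy()
scores = [f(x_pert)]
for i in order[:30]:
    x_pert[i] = baseline
    scores.append(f(x_pert))
aopc = np.mean(scores[0] - np.array(scores[1:]))
print(f"AOPC over 30 steps: {aopc:.3f}")
```

A faithful heatmap removes the most score-supporting pixels first, so its AOPC is large; a random ordering of the same pixels would yield a curve near zero, which is exactly the contrast the benchmark exploits.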
In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pretrained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pre-training with our approach on a state-of-the-art dataset.
Class-incremental learning requires plasticity and stability in order to learn from new data while preserving past knowledge. Finding a compromise between these two properties is particularly challenging when no memory buffer is available, due to catastrophic forgetting. Mainstream methods need to store two deep models, since they integrate new classes using fine-tuning with knowledge distillation from the previous incremental state. We propose a method which has a similar number of parameters but distributes them differently, in order to find a better balance between plasticity and stability. Following the approach already deployed by transfer-based incremental methods, we freeze the feature extractor after the initial state. Classes from the oldest incremental states are trained on this frozen extractor to ensure stability. The most recent classes are predicted using a partially fine-tuned model to introduce plasticity. Our proposed plasticity layer can be incorporated into any transfer-based method designed for memory-free incremental learning, and we apply it to two such methods. Evaluation is carried out with three large-scale datasets. Results show that performance gains are obtained in all tested configurations compared to existing methods.